Training a Neural Radiance Field (NeRF) without pre-computed camera poses is challenging. Recent advances in this direction demonstrate the possibility of jointly optimising a NeRF and camera poses in forward-facing scenes. However, these methods still face difficulties during dramatic camera movement. We tackle this challenging problem by incorporating undistorted monocular depth priors. These priors are generated by correcting scale and shift parameters during training, with which we are then able to constrain the relative poses between consecutive frames. This constraint is achieved using our proposed novel loss functions. Experiments on real-world indoor and outdoor scenes show that our method can handle challenging camera trajectories and outperforms existing methods in terms of novel view rendering quality and pose estimation accuracy.
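As a rough illustration of the scale-and-shift correction described above (a minimal sketch, not the authors' implementation), one can keep a learnable scale and shift per frame and penalise the gap between the corrected monocular prior and the NeRF-rendered depth; all names, shapes and the L1 form of the loss are assumptions here.

```python
import torch
import torch.nn as nn

class DepthPriorCorrector(nn.Module):
    """Learnable per-frame scale/shift for a monocular depth prior (sketch)."""

    def __init__(self, num_frames: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_frames))   # s_i per frame
        self.shift = nn.Parameter(torch.zeros(num_frames))  # t_i per frame

    def forward(self, mono_depth: torch.Tensor, frame_idx: int) -> torch.Tensor:
        # Undistorted prior: d*_i = s_i * d_i + t_i, optimised jointly with the NeRF.
        return self.scale[frame_idx] * mono_depth + self.shift[frame_idx]


def depth_prior_loss(rendered_depth, mono_depth, corrector, frame_idx):
    """Penalise disagreement between rendered depth and the corrected prior,
    which in turn constrains the relative pose between consecutive frames."""
    corrected = corrector(mono_depth, frame_idx)
    return torch.abs(rendered_depth - corrected).mean()
```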
Few-shot learning capability is needed in medical image analysis for the efficient use of support image data that are labelled in order to classify or segment new classes, a task that would otherwise require many more training images and expert annotations. This work describes a fully 3D prototypical few-shot segmentation algorithm, such that a trained network can effectively adapt to clinically interesting structures absent from training, using only a few labelled images from a different institute. First, to compensate for the widely recognised spatial variability between institutions during episodic adaptation to novel classes, a novel spatial registration mechanism is integrated into prototypical learning, consisting of a segmentation head and a spatial alignment module. Second, to help training cope with the imperfect alignment that is observed, a support-mask conditioning module is proposed to further utilise the annotations available in the support images. Experiments are presented on an application segmenting eight anatomical structures important for interventional planning, using a dataset of 589 pelvic T2-weighted MR images acquired at multiple institutes. The results demonstrate the efficacy of each of the 3D formulation, the spatial registration and the support-mask conditioning, all of which contribute positively, independently or collectively. Compared with previously proposed 2D alternatives, statistically significant improvements in few-shot segmentation performance were observed, regardless of whether the support data come from the same or a different institute.
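For intuition about the prototypical component of such a pipeline (a generic sketch under assumed tensor shapes, not the paper's registered 3D implementation), a foreground prototype can be obtained by masked average pooling over support features and the query segmented by cosine similarity:

```python
import torch
import torch.nn.functional as F

def masked_average_pool(feats: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """feats: (C, D, H, W) support features; mask: (D, H, W) binary support label."""
    mask = mask.unsqueeze(0).float()                                  # (1, D, H, W)
    return (feats * mask).sum(dim=(1, 2, 3)) / (mask.sum() + 1e-6)    # (C,)

def prototype_segmentation(query_feats, support_feats, support_mask, tau=20.0):
    """Segment query features against a single foreground prototype (sketch)."""
    proto = masked_average_pool(support_feats, support_mask)          # (C,)
    q = F.normalize(query_feats, dim=0)                               # (C, D, H, W)
    p = F.normalize(proto, dim=0).view(-1, 1, 1, 1)
    sim = (q * p).sum(dim=0)                                          # cosine map (D, H, W)
    return torch.sigmoid(tau * sim)                                   # foreground probability
```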
We introduce a camera relocalisation pipeline that combines absolute pose regression (APR) with direct feature matching. By incorporating exposure-adaptive novel view synthesis, our method successfully addresses photometric distortions in outdoor environments that existing photometric-based methods cannot handle. With domain-invariant feature matching, our solution improves pose regression accuracy through semi-supervised learning on unlabelled data. In particular, the pipeline consists of two components: a novel view synthesizer and DFNet. The former synthesizes novel views to compensate for exposure changes; the latter regresses camera poses and extracts robust features that bridge the domain gap between real and synthetic images. Furthermore, we introduce an online synthetic data generation scheme. We show that these approaches effectively enhance camera pose estimation in both indoor and outdoor scenes. As a result, our method outperforms existing single-image APR methods by as much as 56%, achieving accuracy comparable to 3D structure-based methods.
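A toy sketch of the two ideas in combination, absolute pose regression plus a feature-matching objective between real and synthesized views; the backbone, feature dimension and loss form are assumptions, not DFNet's actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseRegressor(nn.Module):
    """Minimal APR head: image features -> translation + unit-quaternion rotation."""

    def __init__(self, backbone: nn.Module, feat_dim: int = 512):
        super().__init__()
        self.backbone = backbone            # any encoder returning (B, feat_dim)
        self.fc_t = nn.Linear(feat_dim, 3)
        self.fc_q = nn.Linear(feat_dim, 4)

    def forward(self, img):
        f = self.backbone(img)
        t = self.fc_t(f)
        q = F.normalize(self.fc_q(f), dim=-1)          # unit quaternion
        return f, torch.cat([t, q], dim=-1)


def feature_matching_loss(feat_real, feat_synth):
    """Pull features of a real image and its synthesized counterpart together,
    enabling semi-supervised training on unlabelled images via rendered views."""
    return 1.0 - F.cosine_similarity(feat_real, feat_synth, dim=-1).mean()
```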
In deep learning, transfer learning (TL) has become the de facto approach for image-related tasks. Visual features learnt for one task have been shown to be reusable for other tasks, improving performance significantly. By reusing deep representations, TL enables the use of deep models in domains with limited data availability, limited computational resources and/or limited access to human experts; such domains include the vast majority of real-life applications. This paper conducts an experimental evaluation of TL, exploring its trade-offs with respect to performance, environmental footprint, human hours and computational requirements. The results highlight the cases where a cheap feature-extraction approach is preferable, and the situations where an expensive fine-tuning effort may be worth the added cost. Finally, a set of guidelines on the use of TL is proposed.
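The two regimes compared in the evaluation, cheap feature extraction versus full fine-tuning, can be set up as in the sketch below (assuming a recent torchvision; the model choice and number of target classes are placeholders):

```python
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes: int, fine_tune: bool) -> nn.Module:
    """Feature extraction (frozen backbone) vs full fine-tuning of a pretrained CNN."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    if not fine_tune:
        for p in model.parameters():
            p.requires_grad = False          # cheap: only the new head is trained
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head, always trainable
    return model

# Feature extraction is fast and low-energy; fine-tuning is costlier but often more accurate.
extractor = build_transfer_model(num_classes=10, fine_tune=False)
finetuned = build_transfer_model(num_classes=10, fine_tune=True)
```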
3D indoor scene reconstruction from posed images has traditionally been divided into two stages: per-view depth estimation, followed by depth merging and surface reconstruction. Recently, a family of methods has emerged that performs reconstruction directly in a final 3D volumetric feature space. While these methods show impressive reconstruction results, they rely on expensive 3D convolutional layers, limiting their application in resource-constrained environments. In this work, we go back to the traditional route and show how focusing on high-quality multi-view depth prediction leads to highly accurate 3D reconstruction using simple off-the-shelf depth fusion. We propose a simple state-of-the-art multi-view depth estimator with two main contributions: 1) a carefully designed 2D CNN that exploits strong image priors together with a plane-sweep feature volume and a geometric loss, combined with 2) the integration of keyframe and geometric metadata into the cost volume, which allows informed depth-plane scoring. Our method attains a significant lead over current state-of-the-art methods for depth estimation and 3D reconstruction on ScanNet and 7-Scenes, while still allowing online, real-time, low-memory reconstruction. Code, models and results are available at https://nianticlabs.github.io/simplerecon
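For intuition about the cost-volume ingredient, the sketch below builds a plane-sweep volume from dot products between reference features and source features warped to a set of fronto-parallel depth planes; the warping helper is hypothetical and the keyframe/geometric metadata channels described in the paper are omitted:

```python
import torch

def plane_sweep_cost_volume(ref_feats, src_feats, depth_planes, warp_to_reference):
    """
    ref_feats:  (B, C, H, W) reference-view features
    src_feats:  (B, C, H, W) source-view features
    depth_planes: iterable of candidate depths d_1 .. d_D
    warp_to_reference: assumed callable (src_feats, depth) -> src features warped
                       into the reference view at that depth plane.
    Returns a (B, D, H, W) volume of matching scores for depth-plane scoring.
    """
    scores = []
    for d in depth_planes:
        warped = warp_to_reference(src_feats, d)           # (B, C, H, W)
        scores.append((ref_feats * warped).mean(dim=1))    # per-pixel dot product / C
    return torch.stack(scores, dim=1)                      # (B, D, H, W)
```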
Current class-agnostic counting methods can generalise to unseen classes, but typically require reference images to define the type of object to be counted, as well as instance annotations during training. Reference-less class-agnostic counting is an emerging field that treats counting as a repetition-recognition task. Such methods facilitate counting independently of the set composition. We show that a general feature space with global context can enumerate instances in an image without specifying the object type. Specifically, we demonstrate that regression on vision-transformer features, without point-level supervision or reference images, outperforms other reference-less methods and is competitive with methods that use reference images. We show this on FSC-147, the current standard counting dataset. We also propose an improved dataset, FSC-133, which removes erroneous, ambiguous and duplicated images from FSC-147, and demonstrate similar performance on it. To the best of our knowledge, ours is the first weakly supervised reference-less class-agnostic counting method.
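A bare-bones version of "regress a count from vision-transformer features" (illustrative only, not the paper's model): pool the patch tokens for global context and map them to a scalar count, supervised only at the image level. The backbone interface, pooling and loss are assumptions.

```python
import torch
import torch.nn as nn

class CountRegressor(nn.Module):
    """Global count regression from ViT patch tokens (sketch)."""

    def __init__(self, vit_backbone: nn.Module, embed_dim: int = 768):
        super().__init__()
        self.backbone = vit_backbone                   # assumed to return (B, N, embed_dim)
        self.head = nn.Sequential(
            nn.LayerNorm(embed_dim),
            nn.Linear(embed_dim, 1),
        )

    def forward(self, images):
        tokens = self.backbone(images)                 # (B, N, D) patch embeddings
        pooled = tokens.mean(dim=1)                    # global context via mean pooling
        return self.head(pooled).squeeze(-1)           # predicted count per image

def count_loss(pred_counts, gt_counts):
    # Image-level supervision only: no point annotations or reference images needed.
    return nn.functional.smooth_l1_loss(pred_counts, gt_counts)
```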
We address the challenge of building domain-specific knowledge models for industrial use cases, where labelled data and taxonomic information are initially scarce. Our focus is on inductive link prediction models as a basis for practical tools that support knowledge engineers in exploring text collections and in discovering and linking new (so-called open-world) entities to the knowledge graph. We argue that, although neural approaches to text mining have yielded impressive results in recent years, current benchmarks do not properly reflect the typical challenges encountered in the industrial wild. Therefore, our first contribution is an open benchmark coined IRT2 (inductive reasoning with text) that (1) covers knowledge graphs of varying sizes (including very small ones), (2) comes with incidental, low-quality text mentions, and (3) includes not only triple completion but also ranking, which is relevant for supporting experts with discovery tasks. We investigate two neural models for inductive link prediction: one based on end-to-end learning and one that learns from the knowledge graph and text data in separate steps. These models compete with a strong bag-of-words baseline. For linking, the results show that the neural approaches gain a significant performance advantage as the amount of available graph data decreases. For ranking, the results are promising, and the neural approaches outperform the sparse retriever by a wide margin.
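A bag-of-words baseline of the kind mentioned above can be approximated with a TF-IDF retriever that ranks known entities for an open-world mention by cosine similarity; a minimal scikit-learn sketch, where the corpus layout and cut-off are assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_entities(mention_text, entity_texts, top_k=10):
    """Rank known KG entities for a new (open-world) text mention via TF-IDF.

    entity_texts: dict mapping entity id -> concatenated text mentions of that entity.
    """
    names = list(entity_texts)
    vec = TfidfVectorizer(lowercase=True, stop_words="english")
    entity_matrix = vec.fit_transform(entity_texts[n] for n in names)
    mention_vec = vec.transform([mention_text])
    scores = cosine_similarity(mention_vec, entity_matrix).ravel()
    ranked = sorted(zip(names, scores), key=lambda x: x[1], reverse=True)
    return ranked[:top_k]
```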
Selecting the number of topics in LDA models is considered a difficult task, for which alternative approaches have been proposed. The performance of the recently developed singular Bayesian information criterion (sBIC) is evaluated and compared to that of alternative model selection criteria. The sBIC is a generalization of the standard BIC that can be applied to singular statistical models. The comparison is based on Monte Carlo simulations and carried out for several alternative settings, varying with respect to the number of topics, the number of documents and the size of the documents in the corpora. Performance is measured using different criteria which take into account not only the correct number of topics but also whether the relevant topics from the data-generating processes (DGPs) are identified. Practical recommendations for LDA model selection in applications are derived.
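As a simplified illustration of criterion-based selection of the number of topics, the sketch below compares candidate topic counts with an ordinary BIC computed from scikit-learn's LDA; the sBIC studied in the paper additionally accounts for the singular geometry of the model and is not reproduced here, and the parameter count used in the penalty is an assumption.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

def bic_for_lda(doc_term_matrix, n_topics, random_state=0):
    """Fit LDA with n_topics and return a BIC-style score (lower is better)."""
    n_docs, vocab_size = doc_term_matrix.shape
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=random_state)
    lda.fit(doc_term_matrix)
    log_lik = lda.score(doc_term_matrix)          # approximate log-likelihood
    n_params = n_topics * (vocab_size - 1) + n_docs * (n_topics - 1)  # assumed count
    return -2.0 * log_lik + n_params * np.log(n_docs)

def select_n_topics(doc_term_matrix, candidates=(2, 5, 10, 20)):
    """Return the candidate with the lowest BIC-style score, plus all scores."""
    scores = {k: bic_for_lda(doc_term_matrix, k) for k in candidates}
    return min(scores, key=scores.get), scores
```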
Applying deep learning concepts from image detection and graph theory has greatly advanced protein-ligand binding affinity prediction, a challenge with enormous ramifications for both drug discovery and protein engineering. We build upon these advances by designing a novel deep learning architecture consisting of a 3-dimensional convolutional neural network utilizing channel-wise attention and two graph convolutional networks utilizing attention-based aggregation of node features. HAC-Net (Hybrid Attention-Based Convolutional Neural Network) obtains state-of-the-art results on the PDBbind v.2016 core set, the most widely recognized benchmark in the field. We extensively assess the generalizability of our model using multiple train-test splits, each of which maximizes differences between either protein structures, protein sequences, or ligand extended-connectivity fingerprints. Furthermore, we perform 10-fold cross-validation with a similarity cutoff between SMILES strings of ligands in the training and test sets, and also evaluate the performance of HAC-Net on lower-quality data. We envision that this model can be extended to a broad range of supervised learning problems related to structure-based biomolecular property prediction. All of our software is available as open source at https://github.com/gregory-kyro/HAC-Net/.
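One of the two attention ingredients, channel-wise attention inside a 3D CNN, can be illustrated with a squeeze-and-excitation-style block; this is a generic sketch rather than HAC-Net's exact layers, which are available in the linked repository:

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """Squeeze-and-excitation style channel attention for 3D feature maps (sketch)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)            # squeeze: (B, C, 1, 1, 1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                               # per-channel gates in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.mlp(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                    # re-weight voxel feature channels
```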
Artificial Intelligence (AI) and Machine Learning (ML) are weaving their way into the fabric of society, where they play a crucial role in numerous facets of our lives. As we witness the increased deployment of AI and ML in various types of devices, we benefit from their use in energy-efficient algorithms for low-powered devices. In this paper, we investigate a scale and medium far smaller than conventional devices as we move towards molecular systems that can be utilized to perform machine learning functions, i.e., Molecular Machine Learning (MML). Fundamental to the operation of MML is the transport, processing, and interpretation of information propagated by molecules through chemical reactions. We begin by reviewing the current approaches that have been developed for MML, before moving towards potential new directions that rely on gene regulatory networks inside biological organisms, as well as their population interactions, to create neural networks. We then investigate mechanisms for training machine learning structures in biological cells based on calcium signaling and demonstrate their application to building an Analog to Digital Converter (ADC). Lastly, we look at potential future directions as well as challenges that this area could address.